
    Decentralised Coordination of Low-Power Embedded Devices Using the Max-Sum Algorithm

    This paper considers the problem of performing decentralised coordination of low-power embedded devices (as is required within many environmental sensing and surveillance applications). Specifically, we address the generic problem of maximising social welfare within a group of interacting agents. We propose a novel representation of the problem as a cyclic bipartite factor graph composed of variable and function nodes (representing the agents' states and utilities, respectively). We show that such a representation allows us to use an extension of the max-sum algorithm to generate approximate solutions to this global optimisation problem through local decentralised message passing. We empirically evaluate this approach on a canonical coordination problem (graph colouring) and benchmark it against state-of-the-art approximate and complete algorithms (DSA and DPOP). We show that our approach is robust to lossy communication, that it generates solutions closer to those of DPOP than DSA is able to, and that it does so with a communication cost (in terms of total message size) that scales very well with the number of agents in the system (compared to the exponential increase of DPOP). Finally, we describe a hardware implementation of our algorithm operating on low-power Chipcon CC2431 System-on-Chip sensor nodes.
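
    To make the message-passing structure concrete, the sketch below runs a few synchronous rounds of max-sum on a toy three-agent graph-colouring chain. The factor graph, the small unary tie-breaking preferences, and all names are illustrative assumptions; this is not the paper's algorithm nor its CC2431 implementation.

```python
# Illustrative max-sum message passing on a bipartite factor graph
# (assumed toy instance). Variables x1..x3 each pick a colour; pairwise
# factors reward neighbours that choose different colours.
DOMAIN = [0, 1]                                   # two colours

factors = {"f12": ("x1", "x2"),                   # factor -> linked variables
           "f23": ("x2", "x3")}
variables = sorted({v for vs in factors.values() for v in vs})

# small unary preferences (an assumed tie-breaking device, kept at the variables)
bias = {"x1": {0: 0.1, 1: 0.0},
        "x2": {0: 0.0, 1: 0.1},
        "x3": {0: 0.1, 1: 0.0}}

def utility(a, b):
    return 1.0 if a != b else 0.0                 # "colour differently" reward

var_to_fac = {(v, f): {d: 0.0 for d in DOMAIN}
              for f, vs in factors.items() for v in vs}
fac_to_var = {(f, v): {d: 0.0 for d in DOMAIN}
              for f, vs in factors.items() for v in vs}

for _ in range(10):                               # fixed number of synchronous rounds
    # variable -> factor: own bias plus messages from the variable's *other* factors
    for (v, f), msg in var_to_fac.items():
        for d in DOMAIN:
            msg[d] = bias[v][d] + sum(fac_to_var[(g, v)][d]
                                      for g, vs in factors.items()
                                      if v in vs and g != f)
    # factor -> variable: maximise local utility plus the other variable's message
    for (f, v), msg in fac_to_var.items():
        other = next(u for u in factors[f] if u != v)
        for d in DOMAIN:
            msg[d] = max(utility(d, e) + var_to_fac[(other, f)][e] for e in DOMAIN)

# each agent decides locally from its bias and incoming messages
solution = {v: max(DOMAIN, key=lambda d: bias[v][d] +
                   sum(fac_to_var[(f, v)][d]
                       for f, vs in factors.items() if v in vs))
            for v in variables}
print(solution)                                   # -> {'x1': 0, 'x2': 1, 'x3': 0}
```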

    Distributed constraint optimization with structured resource constraints

    Distributed constraint optimization (DCOP) provides a framework for coordinated decision making by a team of agents. Often, during the decision making, capacity constraints on agents' resource consumption must be taken into account. To address such scenarios, an extension of DCOP, Resource-Constrained DCOP, has been proposed. However, certain types of resources have an additional structure associated with them, and exploiting it can yield more efficient algorithms than are possible within a general framework. An example is distribution networks, where the flow of a commodity from sources to sinks is limited by the flow capacity of edges. We present a new model of structured resource constraints that exploits the acyclicity and the flow conservation property of distribution networks. We show how this model can be used in efficient algorithms for finding the optimal flow configuration in distribution networks, an essential problem in managing power distribution networks. Experiments demonstrate the efficiency and scalability of our approach on publicly available benchmarks, comparing favorably against a specialized solver for this task. Our results significantly extend the effectiveness of distributed constraint optimization for practical multi-agent settings.
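
    As a concrete illustration of why acyclicity plus flow conservation makes these problems easier, the sketch below computes the flow each edge of a radial (tree-shaped) network must carry from downstream demands alone, and checks it against edge capacities. The network, demand, and capacity values are made-up toy data, not the paper's model or benchmarks.

```python
# In a radial network fed from a single source, flow conservation fixes the
# flow on every edge to the total demand of the subtree below it, so capacity
# feasibility reduces to one tree traversal. Toy data, for illustration only.
children = {"sub": ["a", "b"], "a": ["c"], "b": [], "c": []}   # rooted at the substation
demand   = {"sub": 0.0, "a": 2.0, "b": 3.0, "c": 1.5}          # load at each bus
capacity = {("sub", "a"): 4.0, ("sub", "b"): 3.5, ("a", "c"): 2.0}

def subtree_flow(node):
    """Flow that must enter `node`: its own demand plus all downstream demand."""
    return demand[node] + sum(subtree_flow(ch) for ch in children[node])

def feasible():
    for parent, kids in children.items():
        for ch in kids:
            edge_flow = subtree_flow(ch)          # conservation fixes this value
            if edge_flow > capacity[(parent, ch)]:
                return False, (parent, ch), edge_flow
    return True, None, None

print(feasible())   # (True, None, None) for the toy data above
```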

    A Distributed, Complete Method for Multi-Agent Constraint Optimization

    We present in this paper a new complete method for distributed constraint optimization. It is a utility-propagation method inspired by the sum-product algorithm. The original algorithm requires fixed-size messages and linear memory, and runs in time linear in the size of the problem; however, it is correct only for tree-shaped constraint networks. In this paper, we show how to extend the algorithm to arbitrary topologies using cycle cutsets, while preserving the linear message size and memory requirements. We present preliminary experimental results on randomly generated problems. The algorithm is formulated for optimization problems, but can easily be applied to satisfaction problems as well.
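
    The sketch below shows the tree-shaped utility-propagation primitive that, as the abstract notes, the original algorithm performs and the cycle-cutset extension builds on: each node reports to its parent, for every parent value, the best utility obtainable in its subtree. The toy tree and utilities are assumptions for illustration only.

```python
# Bottom-up utility propagation on a tree-shaped constraint network
# (illustrative toy problem, not the paper's implementation).
DOMAIN = [0, 1]
children = {"r": ["a", "b"], "a": [], "b": []}        # assumed constraint tree

def edge_util(parent_val, child_val):
    return 1.0 if parent_val != child_val else 0.0    # toy soft constraint

def util_message(node):
    """Best achievable utility of `node`'s subtree, for each value of `node`."""
    best = {}
    for v in DOMAIN:
        total = 0.0
        for child in children[node]:
            child_msg = util_message(child)            # one message per child
            total += max(edge_util(v, cv) + child_msg[cv] for cv in DOMAIN)
        best[v] = total
    return best

root_msg = util_message("r")
print(max(root_msg.values()))                          # optimal utility: 2.0
```

    Roughly speaking, the cycle-cutset extension described in the abstract would condition on an assignment to the cutset variables and run this kind of propagation over the remaining tree for each such assignment.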

    Superstabilizing, Fault-containing Multiagent Combinatorial Optimization

    Self-stabilization in distributed systems is the ability of a system to respond to transient failures by eventually reaching a legal state and maintaining it afterwards. This makes such systems particularly interesting because they can tolerate faults and cope with dynamic environments. In this paper we propose the first self-stabilizing mechanism for multiagent combinatorial optimization, which stabilizes in a state corresponding to the optimal solution of the optimization problem. Our algorithm is based on dynamic programming and requires a linear number of messages to find the optimal solution in the absence of faults. We show how our algorithm can be made superstabilizing, in the sense that, while transitioning from one stable state to the next, the system preserves the assignments from the previous optimal state (similar to a "last-known-good" state) until the new optimal solution is found, without "random" changes to the variables. We give equal bounds for the stabilization and superstabilization times. Furthermore, we describe a general scheme for fault containment and fast response upon low-impact failures; multiple, isolated failures are handled effectively. To show the merits of our approach, we report on experiments with practically sized distributed meeting-scheduling problems in a multiagent system.
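
    One behavioural aspect described above, holding the previous optimal assignment as a "last-known-good" state until a new optimum has been recomputed, can be sketched as follows. The class, the stand-in solver, and the data are hypothetical and are not the paper's protocol.

```python
# Minimal sketch of the superstabilizing "last-known-good" behaviour:
# after a change, the agent keeps acting on its previous optimal assignment
# and switches only once the new optimum is available.
class SuperStabilizingAgent:
    def __init__(self, solve, problem):
        self.solve = solve                      # stand-in for the DP-based solver
        self.current = solve(problem)           # last known optimal assignment

    def on_change(self, new_problem):
        candidate = self.solve(new_problem)     # recompute the optimum
        self.current = candidate                # switch only once it is ready

    def act(self):
        return self.current                     # never expose a partial state

# usage with a trivial stand-in solver over one variable
agent = SuperStabilizingAgent(lambda p: max(p, key=p.get), {"red": 1, "blue": 2})
print(agent.act())                              # 'blue'
agent.on_change({"red": 3, "blue": 2})
print(agent.act())                              # 'red'
```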

    A Scalable Method for Multiagent Constraint Optimization

    We present in this paper a new complete method for distributed constraint optimization. It is a utility-propagation method inspired by the sum-product algorithm (Kschischang et al. 2001). The original algorithm requires fixed-size messages, linear memory, and time linear in the size of the problem; however, it is correct only for tree-shaped constraint networks. In this paper, we show how to extend that algorithm to arbitrary topologies using a pseudotree arrangement of the problem graph. We compare our algorithm with "standard" backtracking algorithms and present experimental results. For some problem types we report orders of magnitude fewer messages, and even the ability to deal with arbitrarily large problems. Our algorithm is formulated for optimization problems, but can easily be applied to satisfaction problems as well.
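
    To illustrate the pseudotree arrangement the abstract refers to, the sketch below builds one by depth-first traversal of an assumed toy constraint graph: DFS tree edges give the parent relation, and the remaining edges become back edges to ancestors. The graph and names are illustrative assumptions, not the paper's construction.

```python
# Build a pseudotree arrangement of a constraint graph by depth-first search
# (illustrative sketch on an assumed toy graph with one cycle).
neighbours = {
    "x1": ["x2", "x3"],
    "x2": ["x1", "x3"],
    "x3": ["x1", "x2", "x4"],
    "x4": ["x3"],
}

def pseudotree(root):
    parent, back_edges, order = {root: None}, [], {}

    def dfs(node):
        order[node] = len(order)                    # discovery index
        for nb in neighbours[node]:
            if nb not in order:
                parent[nb] = node                   # tree edge
                dfs(nb)
            elif nb != parent[node] and order[nb] < order[node]:
                back_edges.append((node, nb))       # back edge to an ancestor

    dfs(root)
    return parent, back_edges

print(pseudotree("x1"))
# ({'x1': None, 'x2': 'x1', 'x3': 'x2', 'x4': 'x3'}, [('x3', 'x1')])
```

    Because back edges only ever connect a node to one of its ancestors, utility messages can be propagated along the tree edges while conditioning on the back-edge ancestors, which is what allows cyclic problem graphs to be handled.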

    Privacy Guarantees through Distributed Constraint Satisfaction

    The reason for using distributed constraint satisfaction algorithms is often to allow agents to find a solution while revealing as little as possible about their variables and constraints. So far, most algorithms for DisCSP do not guarantee the privacy of this information. This paper describes some simple techniques that can be used with DisCSP algorithms such as DPOP to provide sensible privacy guarantees based on the distributed solving process, without sacrificing its efficiency.
